Health 4.0: How Digitisation Drives Innovation in the Healthcare Sector
Driven by networked Electronic Health Record systems, Artificial Intelligence, real-time data from wearable devices with an overlay of invisible user interfaces, and improved analytics, a revolution is afoot in the healthcare industry. Over the next few years, it is likely to fundamentally change how healthcare is delivered and how outcomes are measured. The focus on collaboration, coherence, and convergence will make healthcare more predictive and personalised. This revolution is called Health 4.0. Data portability allows patients and their physicians to access data anytime, anywhere, and enhanced analytics allows for differential diagnoses and medical responses that can be predictive, timely, and innovative. Health 4.0 realises the value of data more consistently and effectively. It can pinpoint areas of improvement and enable more informed decisions. It also helps move the entire healthcare industry from a reactive, fee-for-service system to a value-based system that measures outcomes and ensures proactive prevention (Thuemmler and Bai, 2017). In this paper, the authors discuss how digitisation is paving the way for data-driven innovation in healthcare systems. They elaborate on the opportunities and challenges for all stakeholders involved and discuss how emerging technologies can help overcome the inherent rigidity of today's healthcare ecosystem. Following on from this, the authors explain the importance of research on the actual design of smart healthcare products and product-service systems of the future, and the challenges faced from the viewpoint of design practice.
Gradual Weisfeiler-Leman: Slow and Steady Wins the Race
The classical Weisfeiler-Leman algorithm aka color refinement is fundamental
for graph learning and central for successful graph kernels and graph neural
networks. Originally developed for graph isomorphism testing, the algorithm
iteratively refines vertex colors. On many datasets, the stable coloring is
reached after a few iterations and the optimal number of iterations for machine
learning tasks is typically even lower. This suggests that the colors diverge
too fast, defining a similarity that is too coarse. We generalize the concept
of color refinement and propose a framework for gradual neighborhood
refinement, which allows a slower convergence to the stable coloring and thus
provides a more fine-grained refinement hierarchy and vertex similarity. We
assign new colors by clustering vertex neighborhoods, replacing the original
injective color assignment function. Our approach is used to derive new
variants of existing graph kernels and to approximate the graph edit distance
via optimal assignments regarding vertex similarity. We show that in both
tasks, our method outperforms the original color refinement with only a moderate
increase in running time, advancing the state of the art.
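For orientation, the classical color refinement the paper generalizes can be sketched in a few lines (a minimal illustration, not the paper's gradual variant: the gradual method replaces the injective relabeling below with a clustering of the neighborhood signatures, so that colors split more slowly):

```python
def color_refinement(adj, max_iters=10):
    """Classical Weisfeiler-Leman color refinement.
    adj: dict mapping each vertex to a list of its neighbors."""
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(max_iters):
        # Signature = own color plus the sorted multiset of neighbor colors.
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        # Injective relabeling: distinct signatures get distinct new colors.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: palette[signatures[v]] for v in adj}
        if new_colors == colors:  # stable coloring reached
            break
        colors = new_colors
    return colors
```

On a path graph, for example, the two endpoints receive the same color while the middle vertex is distinguished after one iteration.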
Design for Health 4.0: Exploration of a New Area
Driven by networked Electronic Health Record systems, Artificial Intelligence, real-time data from wearable devices with an overlay of invisible user interfaces, and improved analytics, Health 4.0 is changing the healthcare industry. The focus on collaboration, coherence, and convergence will make healthcare more predictive and personalised. Furthermore, Health 4.0 realises the value of data more consistently and effectively. It can pinpoint areas of improvement and enable more informed decisions. It also helps move the entire healthcare industry from a reactive, fee-for-service system to a value-based system that measures outcomes and ensures proactive prevention.
In this paper, the authors first explore the emerging area of Health 4.0 and identify its opportunities and challenges. This includes understanding the relevant base technologies as well as the design principles for the realization of the smart healthcare products, systems, and product-service systems of the future. Following on from there, the authors focus on the role of design in the specific context of healthcare.
Approximating the Graph Edit Distance with Compact Neighborhood Representations
The graph edit distance is used for comparing graphs in various domains. Due
to its high computational complexity it is primarily approximated. Widely-used
heuristics search for an optimal assignment of vertices based on the distance
between local substructures. While faster ones only consider vertices and their
incident edges, leading to poor accuracy, other approaches require
computationally intense exact distance computations between subgraphs. Our new
method abstracts local substructures to neighborhood trees and compares them
using efficient tree matching techniques. This results in a ground distance for
mapping vertices that yields high quality approximations of the graph edit
distance. By limiting the maximum tree height, our method supports steering
between more accurate results and faster execution. We thoroughly analyze the
running time of the tree matching method and propose several techniques to
accelerate computation in practice. We use compressed tree representations,
recognize redundancies by tree canonization and exploit them via caching.
Experimentally we show that our method provides a significantly improved
trade-off between running time and approximation quality compared to existing
state-of-the-art approaches.
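The general shape of assignment-based GED approximation can be sketched as follows (a deliberately simplified illustration: the paper compares neighborhood trees of configurable height with efficient tree matching and an optimal assignment, whereas this sketch uses only depth-1 neighborhood label multisets and a greedy assignment):

```python
from collections import Counter

def ground_distance(g, u, h, v):
    """Cost of mapping vertex u of g onto vertex v of h, derived from
    labels and depth-1 neighborhoods. g and h are (labels, adj) pairs."""
    labels_g, adj_g = g
    labels_h, adj_h = h
    cost = 0 if labels_g[u] == labels_h[v] else 1  # vertex relabeling cost
    cg = Counter(labels_g[w] for w in adj_g[u])
    ch = Counter(labels_h[w] for w in adj_h[v])
    # Multiset mismatch of neighbor labels approximates edge edit costs.
    cost += sum((cg - ch).values()) + sum((ch - cg).values())
    return cost

def approx_ged(g, h):
    """Greedy vertex assignment; unmatched vertices pay unit cost."""
    unmatched = set(h[0])
    total = 0
    for u in g[0]:
        if not unmatched:
            total += 1  # u has no partner left and must be deleted
            continue
        v = min(unmatched, key=lambda v: ground_distance(g, u, h, v))
        total += ground_distance(g, u, h, v)
        unmatched.discard(v)
    return total + len(unmatched)  # remaining vertices of h are inserted
```

Replacing the greedy loop with an optimal linear assignment (e.g. the Hungarian algorithm) and the depth-1 multisets with deeper neighborhood trees recovers the trade-off between accuracy and speed discussed above.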
EmbAssi: Embedding Assignment Costs for Similarity Search in Large Graph Databases
The graph edit distance is an intuitive measure to quantify the dissimilarity
of graphs, but its computation is NP-hard and challenging in practice. We
introduce methods for answering nearest neighbor and range queries regarding
this distance efficiently for large databases with up to millions of graphs. We
build on the filter-verification paradigm, where lower and upper bounds are
used to reduce the number of exact computations of the graph edit distance.
Highly effective bounds for this involve solving a linear assignment problem
for each graph in the database, which is prohibitive in massive datasets.
Index-based approaches typically provide only weak bounds, leading to high
computational costs during verification. In this work, we derive novel lower bounds
for efficient filtering from restricted assignment problems, where the cost
function is a tree metric. This special case allows embedding the costs of
optimal assignments isometrically into a vector space, rendering efficient
indexing possible. We propose several lower bounds of the graph edit distance
obtained from tree metrics reflecting the edit costs, which are combined for
effective filtering. Our method termed EmbAssi can be integrated into existing
filter-verification pipelines as a fast and effective pre-filtering step.
Empirically we show that for many real-world graphs our lower bounds are
already close to the exact graph edit distance, while our index construction
and search scales to very large databases.
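The filter-verification pattern the abstract builds on can be illustrated with a very simple lower bound (an assumption-laden sketch, not EmbAssi's tree-metric bounds: here graphs are reduced to vertex-label multisets, and half their symmetric difference lower-bounds the unit-cost edit distance, since a single edit changes at most two entries of that difference):

```python
from collections import Counter

def label_lower_bound(labels_q, labels_g):
    """Half the symmetric difference of the vertex-label multisets is a
    lower bound on the graph edit distance under unit costs."""
    cq, cg = Counter(labels_q), Counter(labels_g)
    diff = sum((cq - cg).values()) + sum((cg - cq).values())
    return (diff + 1) // 2

def filter_candidates(query_labels, database, tau):
    """Filtering step of a range query: discard every graph whose lower
    bound already exceeds the threshold tau; only the survivors need an
    exact (expensive) graph edit distance verification."""
    return [gid for gid, labels in database.items()
            if label_lower_bound(query_labels, labels) <= tau]
```

EmbAssi's contribution is to make such pre-filtering both tight and indexable, so that candidates can be retrieved without scanning the whole database.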
Non-Redundant Graph Neural Networks with Improved Expressiveness
Message passing graph neural networks iteratively compute node embeddings by
aggregating messages from all neighbors. This procedure can be viewed as a
neural variant of the Weisfeiler-Leman method, which limits their expressive
power. Moreover, oversmoothing and oversquashing restrict the number of layers
these networks can effectively utilize. The repeated exchange and encoding of
identical information in message passing amplifies oversquashing. We propose a
novel aggregation scheme based on neighborhood trees, which allows for
controlling the redundancy by pruning branches of the unfolding trees
underlying standard message passing. We prove that reducing redundancy improves
expressivity and experimentally show that it alleviates oversquashing. We
investigate the interaction between redundancy in message passing and
redundancy in computation and propose a compact representation of neighborhood
trees, from which we compute node and graph embeddings via a neural tree
canonization technique. Our method is provably more expressive than the
Weisfeiler-Leman method, less susceptible to oversquashing than message passing
neural networks, and provides high classification accuracy on widely-used
benchmark datasets.
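The redundancy the abstract targets is easy to see in plain sum aggregation (a minimal framework-free sketch with list-valued features, not the paper's neural tree canonization): after two rounds, a node's own initial feature flows back to it via its neighbors, i.e. the same information is exchanged and encoded repeatedly in the unfolding trees.

```python
def message_passing_layer(adj, features):
    """One round of message passing with sum aggregation.
    adj: dict vertex -> neighbor list; features: dict vertex -> list of floats."""
    new_features = {}
    for v, neighbors in adj.items():
        agg = list(features[v])  # start from the node's own message
        for u in neighbors:
            for i, x in enumerate(features[u]):
                agg[i] += x  # sum aggregation over neighbors
        new_features[v] = agg
    return new_features
```

On a path graph with one-hot features, two applications return a copy of node 0's own feature to node 0 through node 1; pruning such echo branches of the unfolding trees is the redundancy control described above.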